In this paper, we present a novel method for learning hierarchical representations of Markov decision processes. Our method works by partitioning the state space into subsets and defining subtasks for performing transitions between the partitions. We formulate the problem of partitioning the state space as an optimization problem that can be solved using gradient descent given a set of sampled trajectories, making our method suitable for high-dimensional problems with large state spaces. We empirically validate the method by showing that it can successfully learn a useful hierarchical representation in a navigation domain. Once learned, the hierarchical representation can be used to solve different tasks in the given domain, thereby generalizing knowledge across tasks.
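As a rough illustration of the partitioning step, the sketch below trains a small network to softly assign states to k partitions from sampled trajectories. The loss terms (temporal consistency between consecutive states plus a balance regularizer) are illustrative assumptions, not the paper's exact objective.

```python
# Minimal sketch: learn a soft state-space partition from trajectories by
# gradient descent. The loss terms are assumptions, not the paper's objective.
import torch
import torch.nn as nn

class PartitionNet(nn.Module):
    """Maps a state to a soft assignment over k partitions."""
    def __init__(self, state_dim: int, k: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, k))

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(s), dim=-1)

def partition_loss(model: nn.Module, traj: torch.Tensor) -> torch.Tensor:
    """traj: (T, state_dim) consecutive states from one sampled trajectory."""
    p = model(traj)                                  # (T, k) soft assignments
    # Consecutive states should usually share a partition.
    consistency = ((p[1:] - p[:-1]) ** 2).sum(-1).mean()
    # Keep partitions balanced: penalize low entropy of the mean assignment.
    avg = p.mean(0)
    balance = (avg * avg.clamp_min(1e-8).log()).sum()
    return consistency + 0.1 * balance

model = PartitionNet(state_dim=4, k=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    traj = torch.randn(50, 4)   # stand-in for a sampled trajectory
    loss = partition_loss(model, traj)
    opt.zero_grad(); loss.backward(); opt.step()
```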
Width-based search methods have demonstrated state-of-the-art performance in a wide range of testbeds, from classical planning problems to image-based simulators such as Atari games. These methods scale independently of the size of the state space, but exponentially in the problem width. In practice, running an algorithm with a width larger than 1 is computationally intractable, which prevents IW from solving problems of higher width. In this paper, we present a hierarchical algorithm that plans at two levels of abstraction. The high-level planner uses abstract features that are incrementally discovered from low-level pruning decisions. We illustrate this algorithm in classical planning PDDL domains as well as in pixel-based simulator domains. In classical planning, we show how IW(1) at two levels of abstraction can solve problems of width 2. For pixel-based domains, we show how, in combination with a learned policy and a learned value function, the proposed hierarchical IW can outperform current flat IW-based planners in Atari games with sparse rewards.
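The width-1 novelty rule at the core of IW(1) is compact enough to sketch directly; the snippet below shows only this flat pruning test, not the two-level hierarchical scheme described above.

```python
# Breadth-first search with the IW(1) novelty test: a generated state is
# kept only if it makes some atom (feature/value pair) true for the first
# time in the whole search; otherwise it is pruned.
from collections import deque

def iw1(initial_state, successors, features, is_goal):
    """successors(s) -> iterable of states; features(s) -> hashable atoms."""
    seen_atoms = set(features(initial_state))
    frontier = deque([initial_state])
    while frontier:
        s = frontier.popleft()
        if is_goal(s):
            return s
        for s2 in successors(s):
            new_atoms = [a for a in features(s2) if a not in seen_atoms]
            if new_atoms:                 # novel at width 1: keep exploring
                seen_atoms.update(new_atoms)
                frontier.append(s2)
    return None

# Toy usage: walk a 1-D line from position 0 to position 5.
print(iw1(0, lambda s: [s - 1, s + 1], lambda s: [("x", s)],
          lambda s: s == 5))   # -> 5
```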
The distributed representation of symbols is one of the key technologies in machine learning systems today, playing a pivotal role in modern natural language processing. Traditional word embeddings associate a separate vector with each word. While this approach is simple and leads to good performance, it requires a lot of memory for representing a large vocabulary. To reduce the memory footprint, the default embedding layer in spaCy is a hash embeddings layer. It is a stochastic approximation of traditional embeddings that provides unique vectors for a large number of words without explicitly storing a separate vector for each of them. To be able to compute meaningful representations for both known and unknown words, hash embeddings represent each word as a summary of the normalized word form, subword information and word shape. Together, these features produce a multi-embedding of a word. In this technical report we first lay out a bit of history and introduce the embedding methods in spaCy in detail. Second, we critically evaluate the hash embedding architecture with multi-embeddings on Named Entity Recognition datasets from a variety of domains and languages. The experiments validate most key design choices behind spaCy's embedders, but we also uncover a few surprising results.
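For intuition, here is a from-scratch sketch of the hash embedding principle with a simplified multi-embedding; it illustrates the idea only and is not spaCy's actual MultiHashEmbed implementation.

```python
# Each feature string is hashed with several seeds into a small table and
# the rows are summed, so a large vocabulary shares parameters without an
# explicit word-to-row mapping. Table size, dimensions and the shape
# feature below are simplified illustrative choices.
import hashlib
import numpy as np

def stable_hash(seed: int, text: str) -> int:
    return int(hashlib.md5(f"{seed}:{text}".encode()).hexdigest(), 16)

class HashEmbed:
    def __init__(self, n_rows: int = 5000, dim: int = 96, n_seeds: int = 4):
        self.table = np.random.default_rng(0).normal(0, 0.1, (n_rows, dim))
        self.n_rows, self.n_seeds = n_rows, n_seeds

    def __call__(self, feature: str) -> np.ndarray:
        # Collisions differ across seeds, so the summed vector is unique
        # for most inputs with high probability.
        idxs = [stable_hash(s, feature) % self.n_rows
                for s in range(self.n_seeds)]
        return self.table[idxs].sum(axis=0)

embed = HashEmbed()

def multi_embed(word: str) -> np.ndarray:
    """Concatenate embeddings of the norm, prefix, suffix and word shape."""
    shape = "".join("X" if c.isupper() else "x" if c.isalpha()
                    else "d" if c.isdigit() else c for c in word)
    return np.concatenate(
        [embed(f) for f in (word.lower(), word[:1], word[-3:], shape)])

print(multi_embed("spaCy").shape)   # (384,)
```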
Recent successes of massively overparameterized models have inspired a new line of work investigating the underlying conditions that enable overparameterized models to generalize well. This paper considers a framework where the possibly overparameterized model includes fake features, i.e., features that are present in the model but not in the data. We present a non-asymptotic high-probability bound on the generalization error of the ridge regression problem under the model misspecification of having fake features. Our high-probability results characterize the interplay between the implicit regularization provided by the fake features and the explicit regularization provided by the ridge parameter. We observe that fake features may improve the generalization error, even though they are irrelevant to the data.
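A small numpy simulation in the spirit of this setup is given below: the data come from d true features, while the fitted ridge model additionally includes k fake features that are independent of the data. All dimensions and the noise level are arbitrary illustrative choices.

```python
# Ridge regression under model misspecification with fake features.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 20, 200                       # samples, true, fake features
w = rng.normal(size=d)
X, Xte = rng.normal(size=(n, d)), rng.normal(size=(1000, d))
y = X @ w + 0.5 * rng.normal(size=n)
yte = Xte @ w

def ridge_test_err(n_fake: int, lam: float) -> float:
    # Fake features appear in the model (design matrix) but carry no signal.
    F, Fte = rng.normal(size=(n, n_fake)), rng.normal(size=(1000, n_fake))
    A = np.hstack([X, F])
    w_hat = np.linalg.solve(A.T @ A + lam * np.eye(d + n_fake), A.T @ y)
    return float(np.mean((np.hstack([Xte, Fte]) @ w_hat - yte) ** 2))

for n_fake in (0, k):
    for lam in (1e-6, 1.0):
        print(f"fake={n_fake:3d} lam={lam:g} "
              f"test MSE={ridge_test_err(n_fake, lam):.3f}")
```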
We focus on the continual learning problem where the tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature focusing on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm CoCoA. We derive closed-form expressions for the iterations in the overparametrized case. We illustrate the convergence and the error performance of the algorithm based on the over/under-parametrization of the problem. Our results show that depending on the problem dimensions and data generation assumptions, CoCoA can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access to only one task at a time.
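For concreteness, here is a rough CoCoA-style iteration for distributed least squares: the feature columns are split over K nodes, each node solves a local subproblem against a shared residual, and the updates are combined with conservative 1/K averaging. This conveys the flavor of the algorithm only; the paper's exact updates and closed-form iterates are not reproduced.

```python
# CoCoA-style block-coordinate updates for least squares with the feature
# columns partitioned across K nodes (overparametrized: d > n).
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 40, 100, 4
A = rng.normal(size=(n, d))
y = A @ rng.normal(size=d)
parts = np.array_split(np.arange(d), K)     # column partition across nodes

w = np.zeros(d)
for _ in range(200):
    r = y - A @ w                           # shared residual (communication)
    for idx in parts:                       # nodes would run in parallel
        Ak = A[:, idx]
        # Local least-squares fit to the residual; the tiny regularizer
        # only keeps the local solve well posed.
        dk = np.linalg.solve(Ak.T @ Ak + 1e-8 * np.eye(len(idx)), Ak.T @ r)
        w[idx] += dk / K                    # conservative 1/K averaging
print("residual norm:", np.linalg.norm(y - A @ w))
```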
Using 3D CNNs on high resolution medical volumes is very computationally demanding, especially for large datasets like the UK Biobank, which aims to scan 100,000 subjects. Here we demonstrate that using 2D CNNs on a few 2D projections (representing mean and standard deviation across axial, sagittal and coronal slices) of the 3D volumes leads to reasonable test accuracy when predicting age from brain volumes. Using our approach, one training epoch with 20,324 subjects takes 40-70 seconds using a single GPU, which is almost 100 times faster than a small 3D CNN. These results are important for researchers who do not have access to expensive GPU hardware for 3D CNNs.
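The projection step is simple enough to sketch directly; the snippet below collapses a 3D volume into six 2D channels (mean and standard deviation along each axis). Resizing the differently shaped projections to a common grid for the 2D CNN is an implementation detail not shown here.

```python
# Collapse a 3D volume into six 2D projections: mean and standard
# deviation across slices along each of the three anatomical axes.
import numpy as np

def project_volume(vol: np.ndarray) -> list:
    """vol: (X, Y, Z) array; returns six 2D arrays."""
    projections = []
    for axis in range(3):                  # sagittal, coronal, axial
        projections.append(vol.mean(axis=axis))
        projections.append(vol.std(axis=axis))
    return projections

vol = np.random.rand(182, 218, 182)        # typical MNI-space dimensions
for p in project_volume(vol):
    print(p.shape)
```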
Large annotated datasets are required to train segmentation networks. In medical imaging, it is often difficult, time consuming and expensive to create such datasets, and it may also be difficult to share them with other researchers. Today, different AI models can generate very realistic synthetic images, which can potentially be shared openly as they do not belong to specific persons. However, recent work has shown that using synthetic images for training deep networks often leads to worse performance compared to using real images. Here we demonstrate that using synthetic images and annotations from an ensemble of 10 GANs, instead of from a single GAN, increases the Dice score on real test images by 4.7% to 14.0% on specific classes.
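The sampling side of the ensemble idea can be sketched as below: each synthetic training pair is drawn from a uniformly chosen member of 10 independently trained generators. The Generator class is a placeholder stand-in; the actual GAN architectures and annotation channels are not shown.

```python
# Draw synthetic (image, mask) training pairs from an ensemble of 10
# generators rather than from a single one, increasing sample diversity.
import random
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Placeholder mapping noise to an (image, segmentation mask) pair."""
    def __init__(self, z_dim: int = 64, size: int = 128):
        super().__init__()
        self.size = size
        self.img = nn.Linear(z_dim, size * size)
        self.mask = nn.Linear(z_dim, size * size)

    def forward(self, z):
        img = torch.sigmoid(self.img(z)).view(self.size, self.size)
        mask = torch.sigmoid(self.mask(z)).view(self.size, self.size) > 0.5
        return img, mask

ensemble = [Generator() for _ in range(10)]  # 10 independently trained GANs

def sample_synthetic(n: int):
    """Each pair comes from a uniformly chosen ensemble member."""
    with torch.no_grad():
        return [random.choice(ensemble)(torch.randn(64)) for _ in range(n)]

pairs = sample_synthetic(100)                # synthetic training data
```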
Recent work shows that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph test. In particular, they show that the WL test can be simulated by GNNs. However, those simulations involve neural networks for the 'combine' function of size polynomial or even exponential in the number of graph nodes $n$, as well as feature vectors of length linear in $n$. We present an improved simulation of the WL test on GNNs with \emph{exponentially} lower complexity. In particular, the neural network implementing the combine function in each node has only a polylogarithmic number of parameters in $n$, and the feature vectors exchanged by the nodes of the GNN consist of only $O(\log n)$ bits. We also give logarithmic lower bounds for the feature vector length and the size of the neural networks, showing the (near-)optimality of our construction.
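For reference, the 1-WL color refinement that these constructions simulate can be written in a few lines; graphs whose color histograms diverge at any round are certified non-isomorphic.

```python
# 1-WL color refinement: each round replaces a node's color with a
# compressed signature of its own color and the multiset of neighbor colors.
def wl_colors(adj: dict, rounds: int) -> dict:
    """adj maps node -> list of neighbors; returns final color per node."""
    colors = {v: 0 for v in adj}                 # uniform initial coloring
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Compress each distinct signature to a small integer color.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return colors

# A triangle and a 3-node path get different color histograms after one round.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(sorted(wl_colors(triangle, 2).values()),   # [0, 0, 0]
      sorted(wl_colors(path, 2).values()))       # [0, 0, 1]
```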
Estimation of the 6D pose of rigid objects is a fundamental problem in computer vision. Traditionally, pose estimation is concerned with determining a single best estimate. However, a single estimate is unable to express visual ambiguity, which is unavoidable in many situations due to object symmetries or occlusion of identifying features. The inability to account for pose ambiguities can lead to failure of subsequent methods, which is unacceptable when the cost of failure is high. Estimates of full pose distributions, in contrast to single estimates, are well suited for expressing pose uncertainty. Motivated by this, we propose a novel pose distribution estimation method. An implicit formulation of the probability distribution over object poses is derived from an intermediate representation of the object as a set of keypoints. This ensures that the pose distribution estimates have a high level of interpretability. Furthermore, our method is based on conservative approximations, which leads to reliable estimates. The method has been evaluated on the task of rotation distribution estimation on the YCB-V and T-LESS datasets and performs reliably on all objects.
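As a loose illustration of the implicit formulation, the sketch below scores candidate rotations by how well rotated model keypoints agree with (here simulated) keypoint estimates, and normalizes over a finite sample of rotations. The Gaussian error model and kernel width are assumptions for illustration, not the paper's construction.

```python
# Unnormalized rotation density from keypoint agreement, normalized over a
# finite sample of candidate rotations from SO(3).
import numpy as np
from scipy.spatial.transform import Rotation

keypoints = np.random.default_rng(0).normal(size=(8, 3))  # object keypoints
R_true = Rotation.random(random_state=1)
observed = R_true.apply(keypoints)       # stand-in for estimated keypoints

def unnormalized_density(R: Rotation, sigma: float = 0.3) -> float:
    # Gaussian kernel on keypoint reprojection error (illustrative choice).
    err = np.linalg.norm(R.apply(keypoints) - observed, axis=1)
    return float(np.exp(-(err ** 2).sum() / (2 * sigma ** 2)))

candidates = Rotation.random(5000, random_state=2)
weights = np.array([unnormalized_density(candidates[i])
                    for i in range(len(candidates))])
probs = weights / weights.sum()          # approximate distribution on SO(3)
mode = candidates[int(np.argmax(probs))]
print("angular error of mode (deg):",
      np.degrees((mode * R_true.inv()).magnitude()))
```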
Deep learning training is an expensive process that makes extensive use of GPUs, but not all model training saturates modern powerful GPUs. Multi-Instance GPU (MIG) is a new technology introduced by NVIDIA that can partition a GPU to better fit workloads that do not require all the memory and compute resources of a full GPU. In this paper, we examine the performance of a MIG-enabled A100 GPU under deep learning workloads of three sizes, focusing on image recognition training with ResNet models. We also study the behavior of these workloads when run in isolation on the various MIG instances the GPU allows, as well as when run in parallel on homogeneous instances co-located on the same GPU. Our results demonstrate that when workloads are too small to utilize the whole GPU in isolation, using MIG can significantly improve GPU utilization. By training multiple small models in parallel, although the time per epoch increases, the GPU performs more work per unit of time overall, leading to a $\sim$3x improvement in throughput. In contrast, for medium and large workloads that already utilize the whole GPU well, MIG offers only marginal performance improvements. Nevertheless, we observe that training models in parallel on separate MIG partitions does not exhibit interference, underlining the value of having MIG functionality on modern GPUs.
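In practice, co-locating jobs on MIG slices amounts to pinning each training process to one MIG device. The sketch below shows one hypothetical way to do this from Python; the MIG UUIDs come from `nvidia-smi -L` after the GPU has been partitioned, and `train.py` stands in for the actual ResNet training script.

```python
# Launch one training process per MIG instance by restricting each process
# to a single MIG device via CUDA_VISIBLE_DEVICES.
import os
import subprocess

# Placeholder UUIDs; use the values printed by `nvidia-smi -L` on a
# MIG-partitioned GPU.
mig_devices = [
    "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "MIG-yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
]

procs = []
for dev in mig_devices:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=dev)   # pin to one slice
    procs.append(subprocess.Popen(["python", "train.py"], env=env))

for p in procs:            # each job sees exactly one MIG slice as its GPU
    p.wait()
```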